    Algorithms for estimating the parameters of factorisation machines

    Since their introduction in 2010, factorisation machines have become a popular prediction technique among machine learning practitioners, who have applied the method successfully in data science challenges such as Kaggle competitions and the KDD Cup. Despite these successes, factorisation machines are seldom considered as a modelling technique in business, partly because large companies prefer tried and tested software for model implementation. Popular modelling techniques for prediction problems, such as generalised linear models, neural networks, and classification and regression trees, have been implemented in commercial software such as SAS, which is widely used by banks and by insurance, pharmaceutical and telecommunication companies. To popularise the use of factorisation machines in business, we implement algorithms for fitting factorisation machines in SAS. These algorithms minimise two loss functions, namely the weighted sum of squared errors and the weighted sum of absolute deviations, using coordinate descent and nonlinear programming procedures. The routines are tested for accuracy and efficiency in a simulation study, and the predictive power of factorisation machines is then illustrated by analysing two data sets.
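
    To make the model concrete, here is a minimal Python sketch of a second-order factorisation machine fitted by stochastic gradient descent on the sum of squared errors. This is illustrative only: the paper's routines are implemented in SAS and minimise weighted losses with coordinate descent and nonlinear programming, whereas the function names, the SGD updates and the toy data below are assumptions for exposition.

```python
# Minimal factorisation machine sketch (NOT the paper's SAS implementation).
# Fitted by plain SGD on the unweighted sum of squared errors.
import numpy as np

def fm_predict(X, w0, w, V):
    # y_hat = w0 + X.w + 0.5 * sum_f [ (X V)_f^2 - (X^2)(V^2)_f ]
    linear = X @ w
    pairwise = 0.5 * np.sum((X @ V) ** 2 - (X ** 2) @ (V ** 2), axis=1)
    return w0 + linear + pairwise

def fm_fit_sgd(X, y, k=4, lr=0.01, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w0, w = 0.0, np.zeros(p)
    V = 0.01 * rng.standard_normal((p, k))          # latent factors
    for _ in range(epochs):
        for i in rng.permutation(n):
            x = X[i]
            err = fm_predict(x[None, :], w0, w, V)[0] - y[i]
            # gradients of 0.5 * err^2 for each parameter block
            w0 -= lr * err
            w -= lr * err * x
            V -= lr * err * (np.outer(x, x @ V) - V * (x ** 2)[:, None])
    return w0, w, V

# Toy usage: recover a noisy pairwise interaction from simulated data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = 1.0 + X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 2] * X[:, 3] \
    + 0.1 * rng.standard_normal(200)
w0, w, V = fm_fit_sgd(X, y)
print("train RMSE:", np.sqrt(np.mean((fm_predict(X, w0, w, V) - y) ** 2)))
```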

    Advantages of using factorisation machines as a statistical modelling technique

    Factorisation machines originated in the machine learning literature and have gained popularity because of the high accuracy they achieve in several prediction problems, in particular in the area of recommender systems. This article motivates the use of factorisation machines, discusses their fundamentals, and provides examples of applications and of the possible gains from using factorisation machines as part of the statistician's model-building toolkit. Data sets and existing software packages are used to illustrate how factorisation machines may be fitted and in which contexts their use is worthwhile.
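
    For context, the fundamentals referred to here amount to the standard second-order factorisation machine model (Rendle, 2010), in which pairwise interaction weights are factorised through k-dimensional latent vectors; the second identity is the usual reformulation that makes prediction linear in the number of non-zero features:

```latex
\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{p} w_i x_i
  + \sum_{i=1}^{p}\sum_{j=i+1}^{p}
    \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j ,
  \qquad \mathbf{v}_i \in \mathbb{R}^k ,

\sum_{i<j} \langle \mathbf{v}_i, \mathbf{v}_j \rangle x_i x_j
  = \tfrac{1}{2}\sum_{f=1}^{k}
    \Bigl[ \Bigl( \sum_{i=1}^{p} v_{if} x_i \Bigr)^{2}
         - \sum_{i=1}^{p} v_{if}^{2} x_i^{2} \Bigr] .
```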

    The Impact Of PD-LGD Correlation On Expected Loss And Economic Capital

    The Basel regulatory credit risk rules for expected losses require banks to use downturn loss given default (LGD) estimates because the correlation between the probability of default (PD) and LGD is not captured in the capital formula, even though empirical research has repeatedly demonstrated that this correlation exists. We examine a model that captures this correlation using empirically observed default frequencies together with simulated LGD and default data for a loan portfolio. The model is tested under various conditions dictated by its input parameters. Having established an estimate of the impact on expected losses, we suggest that the model could be calibrated on a bank's own loss data to compensate for the omission of the PD-LGD dependence. Because the model relies on observed default frequencies, it could adapt in real time, allowing provisions to be allocated dynamically.
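
    As a hedged illustration of why the omission matters, the Python sketch below simulates default and LGD from a simple Gaussian copula and compares the portfolio expected loss with and without the PD-LGD correlation. The copula, the parameter values and the sign convention are assumptions for exposition; the paper's model is instead driven by empirically observed default frequencies.

```python
# Illustrative Gaussian-copula simulation of correlated PD and LGD.
# All parameter values are made up for exposition.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_sims, n_loans = 2000, 500
pd_, rho = 0.02, 0.5        # illustrative default probability, latent correlation

# One standard-normal "asset" driver per loan decides default; a second,
# negatively loaded driver generates LGD, so defaults coincide with high LGD.
z_asset = rng.standard_normal((n_sims, n_loans))
z_noise = rng.standard_normal((n_sims, n_loans))
z_lgd = -rho * z_asset + np.sqrt(1 - rho ** 2) * z_noise

default = z_asset < norm.ppf(pd_)        # default indicator
lgd = norm.cdf(z_lgd)                    # LGD mapped into (0, 1)

el_corr = (default * lgd).mean()         # expected loss per unit EAD, correlation kept
el_indep = default.mean() * lgd.mean()   # expected loss if PD and LGD were independent
print(f"EL with PD-LGD correlation: {el_corr:.5f} vs independent: {el_indep:.5f}")
```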

    A Critical Review Of The Basel Margin Of Conservatism Requirement In A Retail Credit Context

    The Basel II accord (2006) includes guidelines to financial institutions for the estimation of regulatory capital (RC) for retail credit risk. Under the advanced Internal Ratings Based (IRB) approach, the formula suggested for calculating RC is based on the Asymptotic Single Risk Factor (ASRF) model, which assumes that a borrower defaults if the value of its assets falls below the value of its debts. The primary inputs to this formula are estimates of the probability of default (PD), loss given default (LGD) and exposure at default (EAD). Banks that have been approved to use the advanced IRB approach usually obtain these estimates from complex models developed in-house. Basel II recognises that estimates of PD, LGD and EAD are likely to involve unpredictable errors, and therefore states that, in order to avoid over-optimism, a bank must add to its estimates a margin of conservatism (MoC) related to the likely range of those errors. Basel II also requires several other measures of conservatism to be incorporated. These requirements lead to confusion among banks and regulators as to what exactly is required as far as a margin of conservatism is concerned. In this paper, we discuss the ASRF model and its shortcomings as well as the Basel II conservatism requirements. We study the MoC concept and review possible approaches for its implementation. Our overall objective is to highlight, for bank practitioners and regulators, certain shortcomings inherent in a pervasively used model, and to offer a less confusing interpretation of the MoC concept.
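
    For reference, the advanced-IRB capital requirement per unit of EAD implied by the ASRF model for retail exposures (which carry no maturity adjustment) can be computed directly. The sketch below implements the Basel II formula; the PD, LGD and asset-correlation values are chosen purely for illustration.

```python
# Basel II advanced-IRB capital requirement per unit of EAD under the ASRF
# model, retail case (no maturity adjustment). The asset correlation R is
# fixed by asset class, e.g. 0.15 for residential mortgages and 0.04 for
# qualifying revolving retail.
from scipy.stats import norm

def irb_capital_requirement(pd_, lgd, r):
    """K = LGD * [ N( (G(PD) + sqrt(R) G(0.999)) / sqrt(1 - R) ) - PD ]."""
    cond_pd = norm.cdf((norm.ppf(pd_) + r ** 0.5 * norm.ppf(0.999))
                       / (1 - r) ** 0.5)
    return lgd * (cond_pd - pd_)

# Illustrative mortgage exposure: PD = 2%, downturn LGD = 25%, R = 0.15.
k = irb_capital_requirement(0.02, 0.25, 0.15)
print(f"capital requirement per unit EAD: {k:.4f}  (RWA = 12.5 * K * EAD)")
```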

    Construction of Forward-Looking Distributions Using Limited Historical Data and Scenario Assessments

    Financial institutions are concerned about various forms of risk that might impact them. Their management has to demonstrate to shareholders and regulators that these risks are managed in a pro-active way. Often the main risks arise from excessive claims on insurance policies, from losses that occur when borrowers default on loan payments, or from operational failures. In an attempt to quantify these risks, the estimation of extreme quantiles of loss distributions is of interest. Since financial companies have limited historical data available for estimating these extreme quantiles, they often augment the historical data with scenario assessments by experts, which provide a forward-looking view. In this chapter, we provide an exposition of statistical methods that may be used to combine historical data and scenario assessments in order to estimate extreme quantiles, and we illustrate their use by means of practical examples. The method has been implemented by major international banks, and based on what we have learnt in the process, we include some practical suggestions for implementing it.
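
    One simple way to combine the two sources, sketched below under strong assumptions, is penalised maximum likelihood: fit a parametric severity distribution to the historical losses while penalising deviations of its quantiles from the expert scenario assessments. The lognormal family, the penalty weight and all numbers are hypothetical illustrations, not the specific method recommended in the chapter.

```python
# Hypothetical blend of sparse historical losses with expert scenario
# assessments of the form "a 1-in-20-year loss of about X".
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(2)
hist = rng.lognormal(mean=10.0, sigma=1.2, size=40)   # limited history
scenarios = [(0.95, 2.0e5), (0.99, 6.0e5)]            # (probability, assessed loss)

def objective(theta, weight=5.0):
    mu, sigma = theta[0], np.exp(theta[1])            # keep sigma > 0
    nll = -lognorm.logpdf(hist, s=sigma, scale=np.exp(mu)).sum()
    # penalty: squared log-distance between model quantiles and scenarios
    pen = sum((np.log(lognorm.ppf(p, s=sigma, scale=np.exp(mu))) - np.log(q)) ** 2
              for p, q in scenarios)
    return nll + weight * len(hist) * pen

res = minimize(objective,
               x0=[np.log(hist).mean(), np.log(np.log(hist).std())],
               method="Nelder-Mead")
mu, sigma = res.x[0], np.exp(res.x[1])
q999 = lognorm.ppf(0.999, s=sigma, scale=np.exp(mu))
print(f"blended 99.9% quantile estimate: {q999:,.0f}")
```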

    The Macroeconomic, Industrial, Distributional and Regional Effects of Government Spending Programs in South Africa

    A computable general equilibrium model of the South African economy (IDC-GEM) is outlined. The model is used to analyse the effects on the economy of increases in government spending such as those at the core of the new government's Reconstruction and Development Program. The analysis concentrates on the implications of alternative methods of financing the program. Results are reported for macroeconomic variables, for the prospects of industries and regions, and for income distribution.

    A proposed best practice model validation framework for banks

    BACKGROUND: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk, and considerable resources must be mobilised for the purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. SETTING: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definitive set of global standards exists. AIM: To assess the available literature for the best validation practices. METHODS: A comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. RESULTS: We propose a coherent ‘best practice’ framework for model validation, together with scorecard tools for evaluating whether the framework has been adequately assembled and implemented. CONCLUSION: The proposed best practice model validation framework is designed to assist firms in constructing an effective, robust and fully compliant model validation programme, and comprises three principal elements: model validation governance, policy and process.

    Critical care admission following elective surgery was not associated with survival benefit: prospective analysis of data from 27 countries

    This was an investigator-initiated study funded by Nestle Health Sciences through an unrestricted research grant, and by a National Institute for Health Research (UK) Professorship held by RP. The study was sponsored by Queen Mary University of London.

    The surgical safety checklist and patient outcomes after surgery: a prospective observational cohort study, systematic review and meta-analysis

    Background: The surgical safety checklist is widely used to improve the quality of perioperative care. However, clinicians continue to debate the clinical effectiveness of this tool. Methods: Prospective analysis of data from the International Surgical Outcomes Study (ISOS), an international observational study of elective in-patient surgery, accompanied by a systematic review and meta-analysis of the published literature. The exposure was surgical safety checklist use. The primary outcome was in-hospital mortality and the secondary outcome was postoperative complications. In the ISOS cohort, a multivariable multi-level generalized linear model was used to test associations. To further contextualise these findings, we included the results from the ISOS cohort in a meta-analysis. Results are reported as odds ratios (OR) with 95% confidence intervals. Results: We included 44 814 patients from 497 hospitals in 27 countries in the ISOS analysis. There were 40 245 (89.8%) patients exposed to the checklist, whilst 7508 (16.8%) sustained ≥1 postoperative complications and 207 (0.5%) died before hospital discharge. Checklist exposure was associated with reduced mortality [OR 0.49 (0.32–0.77); P<0.01], but no difference in complication rates [OR 1.02 (0.88–1.19); P=0.75]. In the systematic review, we screened 3732 records and identified 11 eligible studies of 453 292 patients, including the ISOS cohort. Checklist exposure was associated with both reduced postoperative mortality [OR 0.75 (0.62–0.92); P<0.01; I²=87%] and reduced complication rates [OR 0.73 (0.61–0.88); P<0.01; I²=89%]. Conclusions: Patients exposed to a surgical safety checklist experience better postoperative outcomes, but this could simply reflect wider quality of care in hospitals where checklist use is routine.
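
    For readers unfamiliar with the reporting convention, the sketch below shows how a crude odds ratio and Wald 95% confidence interval are computed from a 2x2 exposure-outcome table. The counts are invented for illustration and are not the ISOS data; the study's own estimates come from a multivariable multi-level model, not this unadjusted calculation.

```python
# Crude odds ratio with Wald 95% CI from a 2x2 table.
# Counts below are hypothetical, NOT the ISOS results.
import math

a, b = 180, 40_065   # checklist group: died, survived (hypothetical)
c, d = 27, 4_542     # no-checklist group: died, survived (hypothetical)

or_ = (a * d) / (b * c)                               # odds ratio
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE on the log scale
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```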